Computer Science > Machine Learning
[Submitted on 24 Jan 2023 (v1), last revised 1 May 2024 (this version, v4)]
Title: A Watermark for Large Language Models
Abstract: Potential harms of large language models can be mitigated by watermarking model output, i.e., embedding signals into generated text that are invisible to humans but algorithmically detectable from a short span of tokens. We propose a watermarking framework for proprietary language models. The watermark can be embedded with negligible impact on text quality, and can be detected using an efficient open-source algorithm without access to the language model API or parameters. The watermark works by selecting a randomized set of "green" tokens before a word is generated, and then softly promoting use of green tokens during sampling. We propose a statistical test for detecting the watermark with interpretable p-values, and derive an information-theoretic framework for analyzing the sensitivity of the watermark. We test the watermark using a multi-billion parameter model from the Open Pretrained Transformer (OPT) family, and discuss robustness and security.
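The soft watermarking scheme described in the abstract can be illustrated with a minimal sketch: a green list is derived from a hash of the previous token, green-token logits receive a bias before sampling, and detection counts green tokens and applies a one-proportion z-test. This is not the authors' implementation; the hash seeding, the bias `delta`, and the green-list fraction `gamma` are illustrative choices.

```python
import hashlib
import math
import random


def green_list(prev_token: int, vocab_size: int, gamma: float = 0.5) -> set:
    """Seed an RNG with a hash of the previous token and select a
    pseudo-random 'green' subset covering a gamma fraction of the vocabulary."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])


def watermarked_sample(logits, prev_token, delta=2.0, gamma=0.5, rng=None):
    """Softly promote green tokens by adding delta to their logits,
    then sample one token from the resulting softmax distribution."""
    rng = rng or random.Random(0)
    green = green_list(prev_token, len(logits), gamma)
    boosted = [l + delta if i in green else l for i, l in enumerate(logits)]
    m = max(boosted)  # subtract max for numerical stability
    exps = [math.exp(b - m) for b in boosted]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r <= acc:
            return i
    return len(exps) - 1


def detect_z(tokens, vocab_size, gamma=0.5):
    """Detection without model access: count tokens that fall in the green
    list seeded by their predecessor. Under the null hypothesis (unwatermarked
    text) the count is Binomial(T, gamma); return the z-score."""
    T = len(tokens) - 1
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev, vocab_size, gamma)
    )
    return (hits - gamma * T) / math.sqrt(T * gamma * (1 - gamma))
```

With `delta = 2.0` and uniform logits, each green token is favored by a factor of about `e^2`, so watermarked text yields a large z-score while unwatermarked text stays near zero; the corresponding p-value is the interpretable quantity the abstract refers to.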
Submission history
From: John Kirchenbauer
[v1] Tue, 24 Jan 2023 18:52:59 UTC (3,550 KB)
[v2] Fri, 27 Jan 2023 18:54:34 UTC (3,620 KB)
[v3] Tue, 6 Jun 2023 17:50:01 UTC (3,618 KB)
[v4] Wed, 1 May 2024 22:04:31 UTC (3,825 KB)